Results 1 - 11 of 11
1.
Nat Commun ; 13(1): 2571, 2022 05 11.
Article in English | MEDLINE | ID: mdl-35546144

ABSTRACT

Many real-world mission-critical applications require continual online learning from noisy data and real-time decision making with a defined confidence level. Brain-inspired probabilistic models of neural networks can explicitly handle the uncertainty in data and allow adaptive learning on the fly. However, their implementation in compact, low-power hardware remains a challenge. In this work, we introduce a novel hardware fabric that implements a new class of stochastic neural network, called the Neural Sampling Machine (NSM), by exploiting stochasticity in the synaptic connections for approximate Bayesian inference. We experimentally demonstrate an in silico hybrid stochastic synapse by pairing a ferroelectric field-effect transistor (FeFET)-based analog weight cell with a two-terminal stochastic selector element. We show that the stochastic switching of the selector between its insulating and metallic states resembles the multiplicative synaptic noise of the NSM. We perform network-level simulations to highlight the salient features of the stochastic NSM, such as autonomous weight normalization for continual online learning and Bayesian inference. We show that the stochastic NSM not only performs highly accurate image classification, reaching 98.25% accuracy on the standard MNIST dataset, but also estimates the uncertainty in its predictions (measured as the entropy of the prediction) when the MNIST digits are rotated. Building such a probabilistic hardware platform that supports neuroscience-inspired models can enhance the learning and inference capability of current artificial intelligence (AI).
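The multiplicative synaptic noise described above can be sketched in a few lines: each synapse is randomly switched on or off on every forward pass (a crude stand-in for the selector's insulator/metal switching), and averaging many stochastic passes yields both a prediction and an entropy-based uncertainty estimate. This is a minimal illustrative sketch, not the paper's model; the blank-out probability `p_blank` and the rescaling by `1/p_blank` are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def nsm_forward(x, W, p_blank=0.5):
    """One stochastic forward pass: every synapse is independently
    switched on/off by a Bernoulli mask (multiplicative noise), then
    rescaled by 1/p_blank so the expected drive is unchanged."""
    mask = rng.random(W.shape) < p_blank      # selector on/off state
    return (mask * W) @ x / p_blank

def predict_with_uncertainty(x, W, n_samples=200):
    """Average many stochastic passes (approximate Bayesian inference)
    and report the entropy of the mean softmax as the uncertainty."""
    logits = np.stack([nsm_forward(x, W) for _ in range(n_samples)])
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs = e / e.sum(axis=1, keepdims=True)
    mean_p = probs.mean(axis=0)
    entropy = -(mean_p * np.log(mean_p + 1e-12)).sum()
    return mean_p, entropy
```

For a rotated digit, more forward passes disagree, the averaged softmax flattens, and the entropy rises — the behavior the abstract reports.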


Subjects
Artificial Intelligence; Neural Networks, Computer; Bayes Theorem; Brain; Synapses
2.
Neural Comput ; 33(8): 2241-2273, 2021 07 26.
Article in English | MEDLINE | ID: mdl-34310672

ABSTRACT

We propose a variation of the self-organizing map algorithm by considering the random placement of neurons on a two-dimensional manifold, following a blue noise distribution from which various topologies can be derived. These topologies possess random (but controllable) discontinuities that allow for a more flexible self-organization, especially with high-dimensional data. The proposed algorithm is tested on one-, two- and three-dimensional tasks, as well as on the MNIST handwritten digits data set and validated using spectral analysis and topological data analysis tools. We also demonstrate the ability of the randomized self-organizing map to gracefully reorganize itself in case of neural lesion and/or neurogenesis.
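The idea of placing neurons at blue-noise positions rather than on a regular grid can be illustrated with a toy self-organizing map in which the neighbourhood function is computed from Euclidean distances between the neurons' random 2-D positions instead of grid coordinates. The dart-throwing sampler and all parameter values below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(1)

def dart_throwing(n, r_min=0.05):
    """Crude blue-noise sampling: accept a uniform random point only if
    it lies at least r_min from every previously accepted point."""
    pts = []
    while len(pts) < n:
        p = rng.random(2)
        if all(np.linalg.norm(p - q) >= r_min for q in pts):
            pts.append(p)
    return np.array(pts)

def train_som(data, n_neurons=64, epochs=500, sigma=0.1, eta=0.25):
    """SOM whose neighbourhood is defined by distance between randomly
    placed neurons on the 2-D manifold, not by grid coordinates."""
    pos = dart_throwing(n_neurons)                  # random 2-D placement
    W = rng.random((n_neurons, data.shape[1]))      # codebook vectors
    for _ in range(epochs):
        x = data[rng.integers(len(data))]
        bmu = np.argmin(((W - x) ** 2).sum(axis=1)) # best-matching unit
        h = np.exp(-((pos - pos[bmu]) ** 2).sum(axis=1) / (2 * sigma ** 2))
        W += eta * h[:, None] * (x - W)             # pull towards input
    return pos, W
```

Because the topology lives in `pos` rather than in a fixed lattice, removing neurons (lesion) or appending new ones (neurogenesis) only changes the distance computations, which is what makes the graceful reorganization possible.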


Subjects
Algorithms; Neural Networks, Computer; Neurons
3.
J Math Neurosci ; 10(1): 20, 2020 Dec 01.
Article in English | MEDLINE | ID: mdl-33259016

ABSTRACT

We provide theoretical conditions guaranteeing that a self-organizing map efficiently develops representations of the input space. The study relies on a neural field model of spatiotemporal activity in area 3b of the primary somatosensory cortex. We apply Lyapunov theory for neural fields to derive theoretical conditions for stability, and verify them with numerical experiments. The analysis highlights the key role played by the balance between excitation and inhibition in the lateral synaptic coupling, and by the strength of the synaptic gains, in the formation and maintenance of self-organizing maps.

4.
Front Neurosci ; 13: 357, 2019.
Article in English | MEDLINE | ID: mdl-31110470

ABSTRACT

Spike-Timing-Dependent Plasticity (STDP) is a bio-inspired local incremental weight update rule commonly used for online learning in spike-based neuromorphic systems. In STDP, the intensity of long-term potentiation and depression in synaptic efficacy (weight) between neurons is expressed as a function of the relative timing between pre- and post-synaptic action potentials (spikes), while the polarity of the change depends on the order (causality) of the spikes. Online STDP weight updates for causal and acausal relative spike times are activated at the onset of post- and pre-synaptic spike events, respectively, implying access to synaptic connectivity in both the forward (pre-to-post) and reverse (post-to-pre) directions. Here we study the impact of different arrangements of synaptic connectivity tables on weight storage and STDP updates for large-scale neuromorphic systems. We analyze the memory efficiency for varying degrees of density in synaptic connectivity, ranging from crossbar arrays for full connectivity to pointer-based lookup for sparse connectivity. The study includes a comparison of storage and access costs and efficiencies for each memory arrangement, along with a trade-off analysis of the benefits of each data structure depending on application requirements and budget. Finally, we present an alternative formulation of STDP via a delayed causal update mechanism that permits efficient weight access, requiring only forward connectivity lookup. We show functional equivalence of the delayed causal updates to the original STDP formulation, with substantial savings in storage and access costs and efficiencies for networks with sparse synaptic connectivity, as typically encountered in large-scale models in computational neuroscience.
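The delayed causal update can be sketched for a single synapse: the causal (pre-before-post) potentiation is deferred until the next pre-synaptic spike, so every update happens on a pre-synaptic event and the rule never needs a reverse (post-to-pre) lookup. Nearest-neighbour spike pairing and the constants below are assumptions for illustration, not the paper's parameters.

```python
import numpy as np

A_PLUS, A_MINUS, TAU = 0.01, 0.012, 20.0  # illustrative STDP constants (ms)

def delayed_causal_stdp(pre_spikes, post_spikes, w=0.5):
    """Single-synapse STDP in which the causal (pre-before-post) update
    is deferred until the NEXT pre-synaptic spike.  All weight updates
    then occur on pre-synaptic events, so only the forward (pre-to-post)
    connectivity table is ever traversed."""
    last_pre, last_post = -np.inf, -np.inf
    events = sorted([(t, 'pre') for t in pre_spikes] +
                    [(t, 'post') for t in post_spikes])
    for t, kind in events:
        if kind == 'pre':
            if last_post > last_pre:          # deferred causal update
                w += A_PLUS * np.exp(-(last_post - last_pre) / TAU)
            if np.isfinite(last_post):        # immediate acausal update
                w -= A_MINUS * np.exp(-(t - last_post) / TAU)
            last_pre = t
        else:
            last_post = t
    return w
```

A causal pair that follows the final pre-synaptic spike is only accounted for at the next pre-synaptic event, which is the deliberate cost of forward-only access.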

5.
Neural Netw ; 114: 1-14, 2019 Jun.
Article in English | MEDLINE | ID: mdl-30831378

ABSTRACT

Neural networks are commonly trained to make predictions through learning algorithms. Contrastive Hebbian learning, a powerful rule inspired by gradient backpropagation, is based on Hebb's rule and the contrastive divergence algorithm. It operates in two phases: a free phase, where the data are fed to the network, and a clamped phase, where the target signals are clamped to the output layer of the network and the feedback signals are transformed through the transposed synaptic weight matrices. This implies symmetry at the synaptic level, for which there is no evidence in the brain so far. In this work, we propose a new variant of the algorithm, called random contrastive Hebbian learning, which does not rely on any synaptic weight symmetry. Instead, it uses random matrices to transform the feedback signals during the clamped phase, and the neural dynamics are described by first-order non-linear differential equations. The algorithm is experimentally verified on a Boolean logic task, classification tasks (handwritten digits and letters), and an autoencoding task. We also show how the parameters, especially the random matrices, affect learning, and use pseudospectral analysis to further investigate their impact on the learning process. Finally, we discuss the biological plausibility of the proposed algorithm and how it can give rise to better computational models of learning.
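A heavily simplified, rate-based sketch of the two-phase procedure, assuming a three-layer network with tanh units: feedback is transformed through a fixed random matrix `G` rather than `W2.T`, and the weight update is the difference between clamped-phase and free-phase Hebbian correlations. The Euler relaxation and all constants are illustrative choices, not the paper's exact dynamics.

```python
import numpy as np

rng = np.random.default_rng(2)

def rchl_step(x, target, W1, W2, G, eta=0.05, n_relax=30, dt=0.2):
    """One random contrastive Hebbian learning step on a 3-layer net.
    Feedback flows through the FIXED random matrix G instead of W2.T,
    removing the weight-symmetry requirement of plain CHL."""
    f = np.tanh
    h, y = np.zeros(W1.shape[0]), np.zeros(W2.shape[0])
    # free phase: the network's own output is fed back through G
    for _ in range(n_relax):
        h += dt * (-h + f(W1 @ x + G @ y))
        y += dt * (-y + f(W2 @ h))
    h_free, y_free = h.copy(), y.copy()
    # clamped phase: output held at the target, feedback again through G
    y = np.array(target, dtype=float)
    for _ in range(n_relax):
        h += dt * (-h + f(W1 @ x + G @ y))
    # contrastive update: clamped minus free Hebbian correlations
    W2 = W2 + eta * (np.outer(y, h) - np.outer(y_free, h_free))
    W1 = W1 + eta * (np.outer(h, x) - np.outer(h_free, x))
    return W1, W2
```

Because `G` is fixed and random, nothing in the update requires the feedback pathway to mirror the feedforward weights.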


Subjects
Feedback; Machine Learning; Neural Networks, Computer
6.
Front Neurosci ; 12: 583, 2018.
Article in English | MEDLINE | ID: mdl-30210274

ABSTRACT

Embedded, continual learning for autonomous and adaptive behavior is a key application of neuromorphic hardware. However, neuromorphic implementations of embedded learning at large scales that are both flexible and efficient have been hindered by the lack of a suitable algorithmic framework. As a result, most neuromorphic hardware is trained off-line on large clusters of dedicated processors or GPUs, with the result transferred post hoc to the device. We address this by introducing the neural and synaptic array transceiver (NSAT), a neuromorphic computational framework that facilitates flexible and efficient embedded learning by matching algorithmic requirements to neural and synaptic dynamics. NSAT supports event-driven supervised, unsupervised and reinforcement learning algorithms, including deep learning. We demonstrate NSAT on a wide range of tasks, including simulation of the Mihalas-Niebur neuron, dynamic neural fields, event-driven random back-propagation for event-based deep learning, event-based contrastive divergence for unsupervised learning, and voltage-based learning rules for sequence learning. We anticipate that this contribution will establish the foundation for a new generation of devices enabling adaptive mobile systems, wearable devices, and robots with data-driven autonomy.

7.
Front Neurosci ; 11: 324, 2017.
Article in English | MEDLINE | ID: mdl-28680387

ABSTRACT

An ongoing challenge in neuromorphic computing is to devise general and computationally efficient models of inference and learning that are compatible with the spatial and temporal constraints of the brain. One increasingly popular and successful approach is to take inspiration from inference and learning algorithms used in deep neural networks. However, the workhorse of deep learning, the gradient-descent-based back-propagation (BP) rule, often relies on the immediate availability of network-wide information stored in high-precision memory during learning, and on precise operations that are difficult to realize in neuromorphic hardware. Remarkably, recent work showed that exact backpropagated gradients are not essential for learning deep representations. Building on these results, we demonstrate an event-driven random BP (eRBP) rule that uses error-modulated synaptic plasticity for learning deep representations. Using a two-compartment Leaky Integrate-and-Fire (I&F) neuron, the rule requires only one addition and two comparisons per synaptic weight, making it well suited to implementation in digital or mixed-signal neuromorphic hardware. Our results show that eRBP learns deep representations rapidly, achieving classification accuracies on permutation-invariant datasets comparable to those obtained in artificial neural network simulations on GPUs, while remaining robust to neural and synaptic state quantization during learning.
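The per-synapse cost claim (one addition, two comparisons) can be illustrated with a rate-based caricature of eRBP: the output error reaches each hidden unit through a fixed random matrix, and a boxcar gate on the membrane potential (two comparisons) stands in for the derivative of the activation function. The variable names and the gating threshold are assumptions; the actual rule is event-driven and spiking.

```python
import numpy as np

rng = np.random.default_rng(3)

def erbp_update(x, u_h, err, G, W, eta=1e-3, u_max=1.0):
    """Rate-based caricature of eRBP for one hidden layer.
    err : output error (prediction minus target)
    u_h : hidden membrane potentials
    G   : fixed random feedback matrix replacing the transposed weights
    The boxcar gate costs two comparisons per neuron; the update itself
    is a single scaled addition per active synapse."""
    gate = np.abs(u_h) < u_max              # boxcar derivative surrogate
    mod = (G @ err) * gate                  # error-modulated term
    W -= eta * np.outer(mod, x)             # additive weight update
    return W
```

Neurons whose potential falls outside the boxcar receive no update at all, which is what keeps the rule cheap on hardware.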

8.
PeerJ Comput Sci ; 3: e142, 2017.
Article in English | MEDLINE | ID: mdl-34722870

ABSTRACT

Computer science offers a large set of tools for prototyping, writing, running, testing, validating, sharing and reproducing results; however, computational science lags behind. In the best case, authors may provide their source code as a compressed archive and feel confident that their research is reproducible. But this is not exactly true. James Buckheit and David Donoho proposed more than two decades ago that an article about computational results is advertising, not scholarship; the actual scholarship is the full software environment, code, and data that produced the result. This implies new workflows, in particular in peer review. Existing journals have been slow to adapt: source code is rarely requested and hardly ever actually executed to check that it produces the results advertised in the article. ReScience is a peer-reviewed journal that targets computational research and encourages the explicit replication of already published research, promoting new, open-source implementations in order to ensure that the original research can be replicated from its description. To achieve this goal, the whole publishing chain is radically different from that of traditional scientific journals: ReScience resides on GitHub, where each new implementation of a computational study is made available together with comments, explanations, and software tests.

9.
Front Neurosci ; 9: 237, 2015.
Article in English | MEDLINE | ID: mdl-26217171

ABSTRACT

Several disorders are related to pathological brain oscillations. In the case of Parkinson's disease, sustained low-frequency oscillations (especially in the β-band, 13-30 Hz) correlate with motor symptoms, although it is still under debate whether these oscillations cause the parkinsonian motor symptoms. Techniques enabling selective disruption of these β-oscillations could contribute to the understanding of the underlying mechanisms and could be exploited for treatment. A particularly appealing technique is Deep Brain Stimulation (DBS). In clinical electrical DBS, electrical currents are delivered at high frequency to a region made of potentially heterogeneous neurons (the subthalamic nucleus (STN) in the case of Parkinson's disease). Even more appealing is DBS with optogenetics which, so far a preclinical method, combines gene transfer with deep-brain light delivery to enable neuromodulation at the scale of a given neural network. In this work, we rely on delayed neural field models of the STN and the external globus pallidus (GPe) to develop, theoretically validate and test in silico a closed-loop optogenetic stimulation strategy that disrupts these sustained oscillations. First, we use tools from control theory to provide theoretical conditions under which sustained oscillations can be attenuated by a closed-loop stimulation proportional to the measured activity of the STN. Second, based on this theoretical framework, we show numerically that the proposed closed-loop stimulation efficiently attenuates sustained oscillations, even when photosensitization affects only 50% of STN neurons. We also show through simulations that oscillation disruption can be achieved when a single light source is used for the whole STN population. We finally test the robustness of the proposed strategy to acquisition and processing delays, as well as to parameter uncertainty.
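The control-theoretic idea — damping a self-sustained oscillation with feedback proportional to the measured activity — can be demonstrated on a toy two-dimensional linear system with an unstable oscillatory mode. The matrix and gains below are invented for illustration; they are a crude stand-in for the STN-GPe loop, not the paper's delayed neural field model.

```python
import numpy as np

def simulate(k, T=2000, dt=0.01):
    """Toy unstable oscillator (stand-in for the STN-GPe loop).
    Closed-loop stimulation u = -k * x[0], proportional to the
    'measured' activity of the first population, is applied to that
    population alone.  Returns the final oscillation amplitude."""
    A = np.array([[0.1, -2.0],
                  [2.0,  0.1]])          # unstable spiral: growth rate 0.1
    x = np.array([1.0, 0.0])
    for _ in range(T):
        u = -k * x[0]                    # proportional feedback
        x = x + dt * (A @ x + np.array([u, 0.0]))
    return float(np.abs(x).max())
```

With gain k = 0 the oscillation grows; any gain large enough to make the closed-loop trace negative (here roughly k > 0.2) damps it, even though only one of the two populations is actuated.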

10.
Article in English | MEDLINE | ID: mdl-25120461

ABSTRACT

In a previous work, we introduced a computational model of area 3b built upon neural field theory, receiving input from a simplified model of the index distal finger pad populated by a random set of touch receptors (Merkel cells). This model has been shown to self-organize following random stimulation of the finger pad model and to cope, to some extent, with cortical or skin lesions. The main hypothesis of the model is that learning of skin representations occurs at the thalamo-cortical level, while cortico-cortical connections serve a stereotyped competition mechanism that shapes the receptive fields. To further assess this hypothesis and the validity of the model, we reproduce in this article the exact experimental protocol of DiCarlo et al. that was used to examine the structure of receptive fields in area 3b of the primary somatosensory cortex. Using the same analysis toolset, the model yields consistent results, with most receptive fields containing a single region of excitation and one to several regions of inhibition. We then extended the study with a dynamic competition that deeply influences the formation of the receptive fields. We hypothesize that this dynamic competition corresponds to some form of somatosensory attention that may help precisely shape the receptive fields. To test this hypothesis, we designed a protocol in which an arbitrary region of interest is delineated on the index distal finger pad and we either (1) explicitly instructed the model to attend to this region (simulating an attentional signal), (2) preferentially trained the model on this region, or (3) combined the two aforementioned protocols simultaneously. Results tend to confirm that dynamic competition leads to shrunken receptive fields, and that its interaction with intensive training promotes massive receptive field migration and shrinkage.

11.
PLoS One ; 7(7): e40257, 2012.
Article in English | MEDLINE | ID: mdl-22808127

ABSTRACT

We investigate the formation and maintenance of ordered topographic maps in the primary somatosensory cortex, as well as the reorganization of representations after sensory deprivation or cortical lesion. We consider both the critical (postnatal) period, during which representations are shaped, and the post-critical period, during which representations are maintained and possibly reorganized. We hypothesize that feed-forward thalamocortical connections are an adequate site of plasticity, while cortico-cortical connections drive a competitive mechanism that is critical for learning. We model a small skin patch located on the distal phalangeal surface of a digit as a set of 256 Merkel ending complexes (MECs) that feed a computational model of the primary somatosensory cortex (area 3b). This model is a two-dimensional neural field in which spatially localized solutions (a.k.a. bumps) drive cortical plasticity through a Hebbian-like learning rule. Simulations explain the initial formation of ordered representations following repetitive, random stimulation of the skin patch. Skin lesions as well as cortical lesions are also studied, and the results confirm that representations can be reorganized using the same learning rule, depending on the type of lesion. For severe lesions, the model suggests that cortico-cortical connections may play an important role in complete recovery.
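The bump dynamics underlying the model can be sketched with a 1-D Amari-style neural field: a difference-of-Gaussians lateral kernel (narrow excitation, broader inhibition) turns a localized input into a single localized bump of activity. A 1-D field is used here for brevity (the paper's model is 2-D), and all parameter values are illustrative assumptions.

```python
import numpy as np

def neural_field_bump(center=30, n=60, steps=300, dt=0.1):
    """1-D Amari-style neural field: lateral coupling is a difference of
    Gaussians (narrow excitation, broad inhibition), the firing rate is
    a sigmoid, and a localized input drives a single activity bump."""
    idx = np.arange(n)
    d2 = (idx[:, None] - idx[None, :]) ** 2
    K = 1.5 * np.exp(-d2 / (2 * 2.0 ** 2)) \
        - 0.75 * np.exp(-d2 / (2 * 6.0 ** 2))     # lateral kernel
    I = 2.0 * np.exp(-(idx - center) ** 2 / (2 * 3.0 ** 2))  # local input
    u = np.zeros(n)
    for _ in range(steps):
        r = 1.0 / (1.0 + np.exp(-u))              # sigmoid firing rate
        u += dt * (-u + K @ r + I)                # field dynamics (Euler)
    return u
```

In the full model it is such a bump, centered on the stimulated skin location, that gates the Hebbian-like update of the thalamocortical weights.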


Subjects
Brain Mapping; Models, Neurological; Somatosensory Cortex/physiopathology; Physical Stimulation; Reproducibility of Results; Skin/pathology; Skin/physiopathology; Somatosensory Cortex/pathology